Start small: Training controllable game level generators without training data by learning at multiple sizes
Authors
Abstract
A level generator is a tool that generates game levels from noise. Training a level generator without a dataset suffers from feedback sparsity, since it is unlikely to generate playable levels via random exploration. A common solution is shaped rewards, which guide the generator to achieve subgoals towards playability, but they consume effort to design and require game-specific domain knowledge. This paper proposes a novel approach to train level generators without datasets or shaped rewards by learning at multiple level sizes, starting small and working up to the desired sizes. The denser feedback at small sizes negates the need for shaped rewards. Additionally, the generators learn to build levels at various sizes, including sizes they were not trained for. We apply our approach to train recurrent auto-regressive generative flow networks (GFlowNets) for controllable level generation. We also adapt diversity sampling to be compatible with GFlowNets. The results show that our generators create diverse playable levels for Sokoban, Zelda, and Danger Dave. Compared to reinforcement learning baselines, our generators achieve better controllability and competitive diversity, while being 9x faster to train.
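To make the "start small" idea concrete, the following is a minimal Python sketch of training an auto-regressive tile generator on a curriculum of increasing level sizes. It is not the paper's implementation: the paper trains recurrent GFlowNets with diversity sampling, while this sketch substitutes a plain REINFORCE-style surrogate purely to illustrate the size curriculum. All names (TILE_TYPES, CURRICULUM, playability_reward) are hypothetical placeholders.

import torch
import torch.nn as nn

TILE_TYPES = 5                          # hypothetical tile vocabulary size
CURRICULUM = [(3, 3), (5, 5), (7, 7)]   # level sizes, from small up to the desired size

class TileGenerator(nn.Module):
    """Recurrent model that emits a level one tile at a time (auto-regressive)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(TILE_TYPES + 1, hidden)   # +1 for a <start> token
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, TILE_TYPES)

    def sample(self, height, width):
        """Sample a height x width level and return it with its total log-probability."""
        token = torch.full((1, 1), TILE_TYPES, dtype=torch.long)   # <start>
        state, logp, tiles = None, 0.0, []
        for _ in range(height * width):
            out, state = self.rnn(self.embed(token), state)
            dist = torch.distributions.Categorical(logits=self.head(out[:, -1]))
            tile = dist.sample()
            logp = logp + dist.log_prob(tile)
            tiles.append(tile.item())
            token = tile.unsqueeze(0)                              # feed the sampled tile back in
        return torch.tensor(tiles).view(height, width), logp

def playability_reward(level):
    # Placeholder standing in for a real game simulator; at small sizes a simple
    # condition like this is satisfied often enough to give dense feedback.
    return 1.0 if (level == 0).any() and (level == 1).any() else 0.0

generator = TileGenerator()
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)
for height, width in CURRICULUM:        # start small, then grow toward the target size
    for _ in range(200):
        level, logp = generator.sample(height, width)
        loss = -playability_reward(level) * logp   # REINFORCE-style surrogate, for illustration only
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Because the same recurrent generator is reused across all sizes in the curriculum, it can also be asked to sample at sizes outside the curriculum, which is the property the abstract refers to as generating sizes the model was not trained for.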
Similar resources
Learning without Training
Achieving high-level skills is generally considered to require intense training, which is thought to optimally engage neuronal plasticity mechanisms. Recent work, however, suggests that intensive training may not be necessary for skill learning. Skills can be effectively acquired by a complementary approach in which the learning occurs in response to mere exposure to repetitive sensory stimulat...
Using game theory techniques in self-organizing maps training
The self-organizing map is the most widely used neural network for clustering and vector quantization. Since its introduction, this method has been applied and extended in a variety of problems across different domains, and numerous improvements have been proposed for it. A self-organizing map uses a number of cells to estimate the distribution function of the input patterns in a multi-dimensional space. The possibility of dead cells is a fundamental problem in the self-organizing map algorithm...
Training Deep Learning based Denoisers without Ground Truth Data
Recent deep learning based denoisers are trained to minimize the mean squared error (MSE) between the output of a network and the ground truth noiseless image in the training data. Thus, it is crucial to have high quality noiseless training data for high performance denoisers. Unfortunately, in some application areas such as medical imaging, it is expensive or even infeasible to acquire such a ...
MGAN: Training Generative Adversarial Nets with Multiple Generators
We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing probl... (a toy sketch of this multiple-generator idea follows this list)
Learning Image Transformations without Training Examples
The use of image transformations is essential for efficient modeling and learning of visual data. But the class of relevant transformations is large: affine transformations, projective transformations, elastic deformations, ... the list goes on. Therefore, learning these transformations, rather than hand coding them, is of great conceptual interest. To the best of our knowledge, all the related...
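The MGAN entry above describes drawing samples from a mixture of generators rather than a single one so that different generators can cover different data modes. Below is a minimal, hypothetical PyTorch sketch of just that sampling step, not MGAN's actual architecture or training objective; the number of generators and all dimensions are made up for illustration.

import torch
import torch.nn as nn

K, NOISE_DIM, DATA_DIM = 4, 16, 2     # hypothetical number of generators and sizes

# K small generators; in a mixture, each one is free to specialize on different data modes.
generators = nn.ModuleList([
    nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, DATA_DIM))
    for _ in range(K)
])

def sample_mixture(batch_size):
    """Pick a generator uniformly per sample, then map noise through it."""
    indices = torch.randint(K, (batch_size,))
    noise = torch.randn(batch_size, NOISE_DIM)
    samples = torch.stack([generators[i](z) for i, z in zip(indices.tolist(), noise)])
    return samples, indices           # the indices are what a classifier could later try to recover

samples, assignments = sample_mixture(8)
print(samples.shape, assignments)     # torch.Size([8, 2]) plus the chosen generator per sample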
Journal
Journal title: Alexandria Engineering Journal
Year: 2023
ISSN: 2090-2670, 1110-0168
DOI: https://doi.org/10.1016/j.aej.2023.04.019